993 results for Regression logistic


Relevance: 80.00%

Abstract:

2000 Mathematics Subject Classification: 62J12, 62P10.

Relevance: 60.00%

Abstract:

This study analyzes the process of building loyalty among corporate consumers in the mobile telephony market. The commercial practices of the mobile operator Vivo will be examined in light of the proposed theories. For this analysis, a quantitative survey will be conducted with 120 companies, split between current and former clients, along with in-depth interviews with 8 companies, also split between current and former clients, and in-depth interviews with the executives responsible for designing the commercial strategies. Based on the survey results, the interviews, and the analysis of the proposed theories, this study aims to identify the marketing practices that can generate loyalty among small and medium-sized corporate clients in the mobile telephony market.

Relevance: 60.00%

Abstract:

There are few systematized data on adverse events of the influenza vaccine in Brazil. This study aimed to identify such events in the population over 60 years of age who attended the National Vaccination Campaign for the Elderly in a district of Campinas, SP, Brazil, in 2000. Interviews were conducted to record general and local symptoms with a temporal link to the administration of the vaccine, in a systematic random sample of the population (n=206). One or more symptoms were reported by 20.38% (CI 14.87-25.88) of the individuals, pain at the injection site being the most frequent, at 12.6% (CI 8.09-17.15). A multiple logistic regression model was fitted with the occurrence of at least one adverse event as the dependent variable. The independent variable associated with adverse reactions was sex (female) (OR=5.89, CI 2.08-16.68). These findings confirm the low reactogenicity of the influenza vaccine.
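The reported odds ratio invites a quick numerical illustration. Below is a minimal sketch computing an odds ratio and its Wald confidence interval from a 2×2 table of sex by adverse event; the counts are made up for illustration, not the study's data.

```python
import math

# Hypothetical 2x2 table (sex x adverse event); counts are illustrative,
# not the study's actual data.
#                    event   no event
# female (exposed)   a=30    b=70
# male               c=6     d=100
a, b, c, d = 30, 70, 6, 100

odds_ratio = (a * d) / (b * c)                    # cross-product ratio
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)      # Wald standard error of log(OR)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

A multiple logistic regression generalizes this computation by adjusting the log odds ratio for other covariates.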

Relevance: 60.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 60.00%

Abstract:

BACKGROUND: Hypertension is a serious public health problem, affecting 20% to 25% of the adult population worldwide and 12% to 35% of the Brazilian population. OBJECTIVE: To evaluate the association between hypertension and excess weight. METHODS: Cross-sectional study carried out in 2005 with a probabilistic sample of the population aged >18 years in Belém (PA), Brazil, through SIMTEL (telephone-based surveillance of chronic diseases). Hypertension was taken as the outcome variable; excess weight as the explanatory variable; and age, schooling, and lifestyle characteristics as confounding variables. Variables associated with hypertension were analyzed by logistic regression to estimate risk. RESULTS: Hypertension affected 16.2% of men and 18.3% of women, and excess weight 49.3% and 34.0%, respectively. The prevalence of hypertension was directly associated with age and with excess weight in both sexes. Among men, it was associated with consumption of fruits and vegetables and low consumption of beans; among women, with widowed or separated marital status and, inversely, with schooling. The risk of hypertension increased with weight in both sexes (p<0.001), being 6.33 times higher among obese men and 3.33 times higher among obese women, compared with normal weight. CONCLUSION: Excess weight was associated with a higher prevalence of hypertension, but variables such as age, schooling, and food consumption affect this association, shaping contexts that favor either a decrease or an increase in this risk.
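As a sketch of how such a logistic regression yields a risk estimate, the toy example below fits an intercept-plus-binary-exposure model by gradient ascent on the log-likelihood; the counts are synthetic and purely illustrative, not the study's data.

```python
import math

# x = 1 (excess weight) or 0; y = 1 (hypertension) or 0.
# 40/100 exposed and 15/100 unexposed cases -- hypothetical numbers.
data = [(1, 1)] * 40 + [(1, 0)] * 60 + [(0, 1)] * 15 + [(0, 0)] * 85

b0, b1 = 0.0, 0.0          # intercept and log odds ratio
lr = 1.0
for _ in range(5000):
    g0 = g1 = 0.0
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
        g0 += y - p        # gradient of the log-likelihood w.r.t. b0
        g1 += (y - p) * x  # gradient w.r.t. b1
    b0 += lr * g0 / len(data)
    b1 += lr * g1 / len(data)

print(f"exp(b1) = {math.exp(b1):.2f}")   # fitted odds ratio
```

With a single binary predictor, the fitted exp(b1) reproduces the empirical cross-product odds ratio, here (40*85)/(60*15) ≈ 3.78.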

Relevance: 60.00%

Abstract:

Statistical methods have been widely employed to assess the capabilities of credit scoring classification models in order to reduce the risk of wrong decisions when granting credit facilities to clients. The predictive quality of a classification model can be evaluated based on measures such as sensitivity, specificity, predictive values, accuracy, correlation coefficients and information theoretical measures, such as relative entropy and mutual information. In this paper we analyze the performance of a naive logistic regression model (Hosmer & Lemeshow, 1989) and a logistic regression with state-dependent sample selection model (Cramer, 2004) applied to simulated data. Also, as a case study, the methodology is illustrated on a data set extracted from a Brazilian bank portfolio. Our simulation results so far have revealed that there is no statistically significant difference in terms of predictive capacity between the naive logistic regression models and the logistic regression with state-dependent sample selection models. However, there is a strong difference between the distributions of the estimated default probabilities from these two statistical modeling techniques, with the naive logistic regression models always underestimating such probabilities, particularly in the presence of balanced samples. (C) 2012 Elsevier Ltd. All rights reserved.
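The predictive-quality measures listed above can all be computed directly from a confusion matrix. A minimal sketch with made-up counts, including the mutual information between true and predicted labels:

```python
import math

# Confusion-matrix counts; made up for illustration.
tp, fn, fp, tn = 70, 30, 20, 80
n = tp + fn + fp + tn

sensitivity = tp / (tp + fn)             # true positive rate
specificity = tn / (tn + fp)             # true negative rate
ppv = tp / (tp + fp)                     # positive predictive value
npv = tn / (tn + fn)                     # negative predictive value
accuracy = (tp + tn) / n

# Mutual information (in nats) between true and predicted labels.
joint = [[tp / n, fn / n], [fp / n, tn / n]]   # rows: true, cols: predicted
pt = [(tp + fn) / n, (fp + tn) / n]            # true-label marginals
pp = [(tp + fp) / n, (fn + tn) / n]            # predicted-label marginals
mi = sum(joint[i][j] * math.log(joint[i][j] / (pt[i] * pp[j]))
         for i in range(2) for j in range(2))
print(sensitivity, specificity, ppv, npv, accuracy, mi)
```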

Relevance: 60.00%

Abstract:

Assessments of environmental and territorial justice are similar in that both assess whether empirical relations between the spatial arrangement of undesirable hazards (or desirable public goods and services) and socio-demographic groups are consistent with notions of social justice, evaluating the spatial distribution of benefits and burdens (outcome equity) and the process that produces observed differences (process equity). Using proximity to major highways in NYC as a case study, we review methodological issues pertinent to both fields and discuss choice and computation of exposure measures, but focus primarily on measures of inequity. We present inequity measures computed from the empirically estimated joint distribution of exposure and demographics and compare them to traditional measures such as linear regression, logistic regression and Theil’s entropy index. We find that measures computed from the full joint distribution provide more unified, transparent and intuitive operational definitions of inequity and show how the approach can be used to structure siting and decommissioning decisions.
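Theil's entropy index mentioned above is straightforward to compute. A small sketch over hypothetical exposure scores:

```python
import math

# Theil's T entropy index of exposure across individuals; the values are
# hypothetical exposure scores (e.g. inverse distance to a highway).
exposure = [0.2, 0.4, 0.5, 0.9, 3.0]

mu = sum(exposure) / len(exposure)
theil = sum((x / mu) * math.log(x / mu) for x in exposure) / len(exposure)
print(f"Theil T = {theil:.3f}")   # 0 means perfect equality
```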

Relevance: 40.00%

Abstract:

Numerous expert elicitation methods have been suggested for generalised linear models (GLMs). This paper compares three relatively new approaches to eliciting expert knowledge in a form suitable for Bayesian logistic regression. These methods were trialled on two experts in order to model the habitat suitability of the threatened Australian brush-tailed rock-wallaby (Petrogale penicillata). The first elicitation approach is a geographically assisted indirect predictive method with a geographic information system (GIS) interface. The second approach is a predictive indirect method which uses an interactive graphical tool. The third method uses a questionnaire to elicit expert knowledge directly about the impact of a habitat variable on the response. Two variables (slope and aspect) are used to examine prior and posterior distributions of the three methods. The results indicate that there are some similarities and dissimilarities between the expert informed priors of the two experts formulated from the different approaches. The choice of elicitation method depends on the statistical knowledge of the expert, their mapping skills, time constraints, accessibility to experts and funding available. This trial reveals that expert knowledge can be important when modelling rare event data, such as threatened species, because experts can provide additional information that may not be represented in the dataset. However care must be taken with the way in which this information is elicited and formulated.
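One way to picture direct elicitation for a univariate Bayesian logistic regression: if the expert states presence probabilities at two values of a habitat variable, those two points pin down prior means for the intercept and slope on the logit scale. A sketch with hypothetical numbers, not the trial's elicited values:

```python
import math

def logit(p):
    return math.log(p / (1 - p))

# Hypothetical elicited judgments about rock-wallaby presence:
x1, p1 = 10.0, 0.2    # slope of 10 degrees: low chance of presence
x2, p2 = 30.0, 0.7    # slope of 30 degrees: high chance of presence

beta1 = (logit(p2) - logit(p1)) / (x2 - x1)   # prior mean for the slope coefficient
beta0 = logit(p1) - beta1 * x1                # prior mean for the intercept
print(beta0, beta1)
```

Uncertainty in the expert's stated probabilities would similarly translate into prior variances; only the prior means are shown here.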

Relevance: 40.00%

Abstract:

The benefits of applying tree-based methods to the purpose of modelling financial assets as opposed to linear factor analysis are increasingly being understood by market practitioners. Tree-based models such as CART (classification and regression trees) are particularly well suited to analysing stock market data which is noisy and often contains non-linear relationships and high-order interactions. CART was originally developed in the 1980s by medical researchers disheartened by the stringent assumptions applied by traditional regression analysis (Breiman et al. [1984]). In the intervening years, CART has been successfully applied to many areas of finance such as the classification of financial distress of firms (see Frydman, Altman and Kao [1985]), asset allocation (see Sorensen, Mezrich and Miller [1996]), equity style timing (see Kao and Shumaker [1999]) and stock selection (see Sorensen, Miller and Ooi [2000])...
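The core CART step is an exhaustive search for the impurity-minimizing split. A minimal sketch of a single Gini-based split on toy data (the feature and labels are invented for illustration):

```python
def gini(labels):
    """Gini impurity of a list of 0/1 labels."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def best_split(xs, ys):
    """Return (weighted impurity, threshold) of the best split x <= t."""
    best = (float("inf"), None)
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best[0]:
            best = (score, t)
    return best

xs = [1, 2, 3, 10, 11, 12]      # e.g. a valuation ratio (toy values)
ys = [0, 0, 0, 1, 1, 1]         # e.g. underperform / outperform
score, threshold = best_split(xs, ys)
print(threshold, score)
```

A full CART tree applies this search recursively to each resulting partition, then prunes.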

Relevance: 40.00%

Abstract:

This paper gives a new iterative algorithm for kernel logistic regression. It is based on the solution of a dual problem using ideas similar to those of the Sequential Minimal Optimization algorithm for Support Vector Machines. Asymptotic convergence of the algorithm is proved. Computational experiments show that the algorithm is robust and fast. The algorithmic ideas can also be used to give a fast dual algorithm for solving the optimization problem arising in the inner loop of Gaussian Process classifiers.
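For context, kernel logistic regression itself can be written in a few lines. The sketch below fits the kernel-expansion coefficients by plain gradient ascent on a toy 1-D problem; this is not the paper's SMO-style dual algorithm, just the simplest thing that runs.

```python
import math

# Toy 1-D training data.
X = [-2.0, -1.5, -1.0, 1.0, 1.5, 2.0]
y = [0, 0, 0, 1, 1, 1]

def k(a, b):                             # RBF kernel
    return math.exp(-(a - b) ** 2)

K = [[k(a, b) for b in X] for a in X]
alpha = [0.0] * len(X)                   # f(x) = sum_j alpha_j k(x_j, x)
lam, lr = 0.01, 0.5
for _ in range(2000):
    f = [sum(K[i][j] * alpha[j] for j in range(len(X))) for i in range(len(X))]
    p = [1.0 / (1.0 + math.exp(-fi)) for fi in f]
    # Gradient of the penalized log-likelihood w.r.t. alpha: K(y - p) - lam*K*alpha.
    for j in range(len(X)):
        g = sum(K[i][j] * (y[i] - p[i] - lam * alpha[i]) for i in range(len(X)))
        alpha[j] += lr * g / len(X)

f = [sum(K[i][j] * alpha[j] for j in range(len(X))) for i in range(len(X))]
pred = [1 if 1.0 / (1.0 + math.exp(-fi)) > 0.5 else 0 for fi in f]
print(pred)
```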

Relevance: 40.00%

Abstract:

Elastic Net Regularizers have shown much promise in designing sparse classifiers for linear classification. In this work, we propose an alternating optimization approach to solve the dual problems of elastic net regularized linear classification Support Vector Machines (SVMs) and logistic regression (LR). One of the sub-problems turns out to be a simple projection. The other sub-problem can be solved using dual coordinate descent methods developed for non-sparse L2-regularized linear SVMs and LR, without altering their iteration complexity and convergence properties. Experiments on very large datasets indicate that the proposed dual coordinate descent - projection (DCD-P) methods are fast and achieve comparable generalization performance after the first pass through the data, with extremely sparse models.
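The sparsity induced by the elastic net regularizer can be illustrated with a simple primal proximal-gradient sketch; this is not the paper's DCD-P dual method, only a toy demonstration that the L1 prox (soft-thresholding) zeroes out the weight on a noise feature.

```python
import math

# Feature 0 separates the classes; feature 1 is pure noise, so the L1
# part of the elastic net should drive its weight to exactly zero.
# All numbers are invented for illustration.
X = [[1.0, 0.1], [0.9, -0.2], [0.8, 0.15], [-1.0, 0.05], [-0.9, -0.1], [-0.8, 0.2]]
y = [1, 1, 1, 0, 0, 0]
l1, l2, lr = 0.05, 0.05, 0.5
w = [0.0, 0.0]

def soft(v, t):                          # soft-thresholding: prox of the L1 term
    return math.copysign(max(abs(v) - t, 0.0), v)

for _ in range(3000):
    grad = [0.0, 0.0]                    # gradient of the logistic loss
    for xi, yi in zip(X, y):
        p = 1.0 / (1.0 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi))))
        for j in range(2):
            grad[j] += (p - yi) * xi[j]
    for j in range(2):
        v = w[j] - lr * (grad[j] / len(X) + l2 * w[j])   # smooth (loss + L2) step
        w[j] = soft(v, lr * l1)                          # L1 prox step
print(w)
```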